-
Abstract: Energy efficiency in computation is ultimately limited by noise, with quantum limits setting the fundamental noise floor. Analog physical neural networks hold promise for improved energy efficiency compared to digital electronic neural networks. However, they are typically operated in a relatively high-power regime so that the signal-to-noise ratio (SNR) is large (>10), and the noise can be treated as a perturbation. We study optical neural networks where all layers except the last are operated in the limit that each neuron can be activated by just a single photon, and as a result the noise on neuron activations is no longer merely perturbative. We show that by using a physics-based probabilistic model of the neuron activations in training, it is possible to perform accurate machine-learning inference in spite of the extremely high shot noise (SNR ~ 1). We experimentally demonstrated MNIST handwritten-digit classification with a test accuracy of 98% using an optical neural network with a hidden layer operating in the single-photon regime; the optical energy used to perform the classification corresponds to just 0.038 photons per multiply-accumulate (MAC) operation. Our physics-aware stochastic training approach might also prove useful with non-optical ultra-low-power hardware.
Free, publicly-accessible full text available December 1, 2026
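The key idea in this abstract is that training against an explicit physical noise model lets the network tolerate SNR ~ 1 activations at inference time. The snippet below is a minimal sketch of that idea, not the authors' code: a hidden layer whose optical pre-activations set the mean photon count at each detector, with shot noise modeled by Poisson sampling. The layer sizes, the clipping nonlinearity, and the rescaling are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation): a hidden layer
# whose activations are single-photon detection events. The optical dot product
# sets the mean photon count per neuron; shot noise is modeled as Poisson sampling.
import numpy as np

rng = np.random.default_rng(0)

def noisy_hidden_layer(x, W, photons_per_neuron=1.0):
    """Forward pass with shot-noise-limited activations (SNR ~ 1)."""
    z = x @ W                                                  # noise-free optical pre-activation
    mean_counts = photons_per_neuron * np.clip(z, 0.0, None)   # mean detected photons per neuron
    counts = rng.poisson(mean_counts)                          # stochastic photon-detection events
    return counts / photons_per_neuron                         # rescale counts to activation units

# Toy usage: a 100-input, 32-neuron hidden layer driven at ~1 photon per neuron.
x = rng.random(100)
W = rng.normal(scale=0.1, size=(100, 32))
a = noisy_hidden_layer(x, W, photons_per_neuron=1.0)
print(a[:5])
```

In a physics-aware training loop, sampling from this same forward model (or using its analytic mean and variance) would let gradient descent find weights whose decisions remain accurate under the photon-counting noise, which is the spirit of the approach described above.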
-
Abstract High-bandwidth applications, from multi-gigabit communication and high-performance computing to radar signal processing, demand ever-increasing processing speeds. However, they face limitations in signal sampling and computation due to hardware and power constraints. In the microwave regime, where operating frequencies exceed the fastest clock rates, direct sampling becomes difficult, prompting interest in neuromorphic analog computing systems. We present the first demonstration of direct broadband frequency domain computing using an integrated circuit that replaces traditional analog and digital interfaces. This features a Microwave Neural Network (MNN) that operates on signals spanning tens of gigahertz, yet reprogrammed with slow, 150 MBit/sec control bitstreams. By leveraging significant nonlinearity in coupled microwave oscillators, features learned from a wide bandwidth are encoded in a comb-like spectrum spanning only a few gigahertz, enabling easy inference. We find that the MNN can search for bit sequences in arbitrary, ultra-broadband10 GBit/sec digital data, demonstrating suitability for high-speed wireline communication.Notably, it can emulate high-level digital functions without custom on-chip circuits, potentially replacing power-hungry sequential logic architectures. Its ability to track frequency changes over long capture times also allows for determining flight trajectories from radar returns. Furthermore, it serves as an accelerator for radio-frequency machine learning, capable of accurately classifying various encoding schemes used in wireless communication. The MNN achieves true, reconfigurable broadband computation, which has not yet been demonstrated by classical analog modalities, quantum reservoir computers using superconducting circuits, or photonic tensor cores, and avoidsthe inefficiencies of electro-optic transduction. Its sub-wavelength footprint in a Complementary Metal-Oxide-Semiconductor process and sub-200 milliwatt power consumption enable seamless integration as a general-purpose analog neural processor in microwave and digital signal processing chips.more » « lessFree, publicly-accessible full text available January 10, 2026
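As rough intuition for the coupled-oscillator computation described above, the toy model below is an assumption-based sketch in the reservoir-computing spirit, not the chip's actual circuit: a small bank of coupled nonlinear oscillators is driven by an input waveform, and the spectrum of the response supplies features for a downstream readout. The oscillator count, damping, tanh nonlinearity, coupling matrix, and Euler integration are all illustrative choices.

```python
# Toy coupled-oscillator "frequency-domain feature extractor" (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def coupled_oscillator_response(drive, w0, coupling, dt=1e-3):
    """Integrate N coupled, driven, nonlinearly-coupled oscillators (simple Euler scheme)."""
    n = len(w0)
    x = np.zeros(n)
    v = np.zeros(n)
    trace = []
    for s in drive:
        accel = -w0**2 * x - 0.1 * v + coupling @ np.tanh(x) + s   # linear restoring force,
        v = v + dt * accel                                          # damping, nonlinear coupling,
        x = x + dt * v                                              # and a common drive term
        trace.append(x.copy())
    return np.array(trace)

# Toy usage: spectral features of the oscillator bank for one noisy input waveform.
t = np.arange(4096) * 1e-3
drive = np.sin(2 * np.pi * 7.3 * t) + 0.5 * rng.normal(size=t.size)  # broadband-ish input
w0 = np.array([5.0, 6.0, 7.0, 8.0])                                  # oscillator frequencies
coupling = 0.2 * rng.normal(size=(4, 4))                             # random coupling matrix
resp = coupled_oscillator_response(drive, w0, coupling)
features = np.abs(np.fft.rfft(resp[:, 0]))[:64]                      # comb-like spectral features
print(features.shape)
```

A trained linear readout applied to such spectral features would play the role of the inference step mentioned in the abstract; in the actual MNN, the oscillator parameters themselves are what the slow control bitstreams reprogram.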
-
Free, publicly-accessible full text available November 1, 2025
-
Abstract: Deep learning has become a widespread tool in both science and industry. However, continued progress is hampered by the rapid growth in energy costs of ever-larger deep neural networks. Optical neural networks provide a potential means to solve the energy-cost problem faced by deep learning. Here, we experimentally demonstrate an optical neural network based on optical dot products that achieves 99% accuracy on handwritten-digit classification using ~3.1 detected photons per weight multiplication and ~90% accuracy using ~0.66 photons (~2.5 × 10⁻¹⁹ J of optical energy) per weight multiplication. The fundamental principle enabling our sub-photon-per-multiplication demonstration—noise reduction from the accumulation of scalar multiplications in dot-product sums—is applicable to many different optical-neural-network architectures. Our work shows that optical neural networks can achieve accurate results using extremely low optical energies.
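The stated principle, noise reduction from accumulating many scalar multiplications before detection, can be checked numerically. The snippet below is an illustrative simulation (not the authors' analysis): under Poisson shot noise, the summed signal of a length-N dot product grows like N while its noise grows like sqrt(N), so the SNR of the accumulated result improves as sqrt(N) even at ~0.66 photons per multiplication.

```python
# Numerical illustration of shot-noise averaging in an optical dot product.
import numpy as np

rng = np.random.default_rng(2)

def dot_product_snr(n, photons_per_mac, trials=20000):
    """Empirical SNR of a length-n dot product whose photons accumulate on one detector."""
    mean_total = n * photons_per_mac               # expected photons reaching the detector
    counts = rng.poisson(mean_total, size=trials)  # detected photons per trial (shot noise)
    return counts.mean() / counts.std()

for n in (10, 100, 1000):
    print(n, round(dot_product_snr(n, photons_per_mac=0.66), 2))
# The SNR grows roughly as sqrt(n * 0.66), so long dot products stay accurate
# even below one photon per multiply-accumulate.
```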
